Tech Billionaires Already Captured the White House. They Still Want to Be Kings

WIRED

From Montenegro to northern California, the tech elite dream of building cities where they make the rules. Is this, finally, their moment? The shirtless man in the golden mask and cape has plans to lead his own country one day. There is no location yet, but it will be a crypto- and AI-powered paradise of medical experimentation, filled with people who want to "make death optional," he says. For now, though, he's leading a sparsely attended rave on the second floor of a San Francisco office building. A DJ is spinning at one end of an open room. A handful of people sway and jump on the space cleared out as a dance floor. At a nearby table, coffee is available with many alternative milks.


Billionaires dream of building utopian techno-city in Greenland

Popular Science

A handful of wealthy, politically connected Silicon Valley investors are reportedly eyeing Greenland's icy shores as the site for a techno-utopian "freedom city." That's according to a report from Reuters, which details a proposed effort to establish a new, libertarian-minded municipality characterized by minimal corporate regulation and a focus on accelerating emerging technologies like AI and mini nuclear reactors. Supporters of increased economic development in Greenland argue its frigid climate could naturally cool massive, energy-intensive AI data centers. Large deposits of critical and rare earth minerals buried beneath the island's ice sheets could also potentially be used to manufacture consumer electronics. The so-called "start-up city"--which bears similarities to another ongoing venture in California's Solano County--reportedly already has the backing of PayPal co-founder Peter Thiel and Ken Howery, President Donald Trump's pick for Denmark ambassador.


Judge orders leaders of cult-like 'Zizian' group to be held without bail

Al Jazeera

A Maryland court has ordered a blogger known as "Ziz", who leads a cult-like group connected to six killings, to be held without bail. The blogger, Jack LaSota, 34, of Berkeley, California, was arrested Sunday along with Michelle Zajko, 32, of Media, Pennsylvania, and Daniel Blank, 26, of Sacramento, California. The Zizians, as the group is known after its apparent leader, have been tied to the killing of United States Border Patrol agent David Maland last month near the Canadian border, as well as five other killings in three states. LaSota, Zajko and Blank were arrested in Frostburg, Maryland, on Sunday afternoon. The judge in the case ordered LaSota to be held without bail, citing concerns about her being a flight risk and a danger to public safety.


SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence

Liu, Zhining, Amjad, Rana Ali, Adkathimar, Ravinarayana, Wei, Tianxin, Tong, Hanghang

arXiv.org Artificial Intelligence

Providing Language Models (LMs) with relevant evidence in the context (whether retrieved or user-provided) can significantly improve their ability to give factually correct, grounded responses. However, recent studies have found that LMs often struggle to fully comprehend and utilize key evidence from the context, especially when it contains noise and irrelevant information - an issue common in real-world scenarios. To address this, we propose SelfElicit, an inference-time approach that helps LMs focus on key contextual evidence through self-guided explicit highlighting. By leveraging the inherent evidence-finding capabilities of LMs via the attention scores of deeper layers, our method automatically identifies and emphasizes key evidence within the input context, facilitating more accurate and factually grounded responses without additional training or iterative prompting. We demonstrate that SelfElicit brings consistent and significant improvement on multiple evidence-based QA tasks for various LM families while maintaining computational efficiency. Our code and documentation are available at https://github.com/ZhiningLiu1998/SelfElicit.
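The core idea - score each context sentence by attention mass and wrap the high-scoring ones in highlight markers before answering - can be sketched with toy numbers. This is a minimal illustration, not the paper's implementation: the sentence scores, the `<evidence>` marker, and the threshold rule are all stand-ins for what SelfElicit derives from a real model's deeper-layer attention.

```python
import numpy as np

def self_elicit_highlight(sentences, attention, threshold_ratio=0.5):
    """Toy sketch: mark sentences whose mean attention exceeds a
    fraction of the best sentence's score, then rebuild the context
    with explicit markers around the selected evidence."""
    scores = np.array([np.mean(a) for a in attention])
    threshold = threshold_ratio * scores.max()
    highlighted = [
        f"<evidence>{s}</evidence>" if sc >= threshold else s
        for s, sc in zip(sentences, scores)
    ]
    return " ".join(highlighted)

sentences = ["Paris is the capital of France.",
             "The weather was cloudy.",
             "France borders Spain."]
# Hypothetical per-token attention scores for each sentence,
# standing in for scores read off a deep transformer layer.
attention = [[0.9, 0.8], [0.1, 0.2], [0.3, 0.4]]
print(self_elicit_highlight(sentences, attention))
```

The rebuilt, marked-up context would then be fed back to the LM in place of the raw context, which is what lets the method work without any training.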


EVOLvE: Evaluating and Optimizing LLMs For Exploration

Nie, Allen, Su, Yi, Chang, Bo, Lee, Jonathan N., Chi, Ed H., Le, Quoc V., Chen, Minmin

arXiv.org Artificial Intelligence

Despite their success in many domains, large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty. This is crucial as many real-world applications, ranging from personalized recommendations to healthcare interventions, demand that LLMs not only predict but also actively learn to make optimal decisions through exploration. In this work, we measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications. We develop a comprehensive suite of environments, including both context-free and contextual bandits with varying task difficulties, to benchmark LLMs' performance. Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs: by providing explicit algorithm-guided support during inference; and through algorithm distillation via in-context demonstrations and fine-tuning, using synthetic data generated from these algorithms. Impressively, these techniques allow us to achieve superior exploration performance with smaller models, surpassing larger models on various tasks. We conduct an extensive ablation study to shed light on various factors, such as task difficulty and data representation, that influence the efficiency of LLM exploration. Additionally, we conduct a rigorous analysis of the LLM's exploration efficiency using the concept of regret, linking its ability to explore to the model size and underlying algorithm.
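The evaluation setting and the regret metric can be made concrete with a small sketch: a context-free Bernoulli bandit played by a classical epsilon-greedy agent, accumulating regret against the best arm. This is an illustrative baseline in the spirit of the benchmark, not the paper's environments or agents; the arm means and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_epsilon_greedy(means, steps=2000, eps=0.1):
    """Play a context-free Bernoulli bandit with epsilon-greedy and
    return cumulative regret: the gap between always pulling the
    best arm and the arms actually chosen."""
    n_arms = len(means)
    counts = np.zeros(n_arms)
    values = np.zeros(n_arms)  # running mean reward per arm
    best = max(means)
    regret = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = int(rng.integers(n_arms))   # explore
        else:
            arm = int(np.argmax(values))      # exploit
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - means[arm]
    return regret

print(run_epsilon_greedy([0.2, 0.5, 0.8]))
```

An LLM agent slots into the same loop by replacing the arm-selection rule with a model call on the interaction history, which is exactly what makes regret a model-agnostic way to compare exploration ability.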


MoqaGPT: Zero-Shot Multi-modal Open-domain Question Answering with Large Language Model

Zhang, Le, Wu, Yihong, Mo, Fengran, Nie, Jian-Yun, Agrawal, Aishwarya

arXiv.org Artificial Intelligence

Multi-modal open-domain question answering typically requires evidence retrieval from databases across diverse modalities, such as images, tables, passages, etc. Even Large Language Models (LLMs) like GPT-4 fall short in this task. To enable LLMs to tackle the task in a zero-shot manner, we introduce MoqaGPT, a straightforward and flexible framework. Using a divide-and-conquer strategy that bypasses intricate multi-modality ranking, our framework can accommodate new modalities and seamlessly transition to new models for the task. Built upon LLMs, MoqaGPT retrieves and extracts answers from each modality separately, then fuses this multi-modal information using LLMs to produce a final answer. Our methodology boosts performance on the MMCoQA dataset, improving F1 by +37.91 points and EM by +34.07 points over the supervised baseline. On the MultiModalQA dataset, MoqaGPT surpasses the zero-shot baseline, improving F1 by 9.5 points and EM by 10.1 points, and significantly closes the gap with supervised methods. Our codebase is available at https://github.com/lezhang7/MOQAGPT.
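The divide-and-conquer strategy described above - extract one candidate answer per modality, then fuse - can be sketched as follows. This is a schematic of the control flow only: the per-modality extractors and the score-weighted fusion rule below are hypothetical stand-ins for the retriever-reader pipelines and the LLM fusion call used in MoqaGPT.

```python
def moqa_style_answer(question, extractors, fuse):
    """Divide: get one (answer, score) candidate per modality.
    Conquer: hand all candidates to a fusion step for the final answer."""
    candidates = {m: extract(question) for m, extract in extractors.items()}
    return fuse(question, candidates)

# Hypothetical extractors standing in for per-modality pipelines.
extractors = {
    "text":  lambda q: ("Paris", 0.9),
    "table": lambda q: ("Paris", 0.7),
    "image": lambda q: ("Lyon", 0.3),
}

def weighted_fuse(question, candidates):
    # Stand-in for the LLM fusion step: sum scores per distinct answer.
    tally = {}
    for answer, score in candidates.values():
        tally[answer] = tally.get(answer, 0.0) + score
    return max(tally, key=tally.get)

print(moqa_style_answer("What is the capital of France?",
                        extractors, weighted_fuse))
# → Paris
```

Because each modality is handled independently, adding a new modality means adding one extractor; neither the other pipelines nor the fusion step need to change.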


AI Could Change How Blind People See the World

WIRED

For her 38th birthday, Chela Robles and her family made a trek to One House, her favorite bakery in Benicia, California, for a brisket sandwich and brownies. On the car ride home, she tapped a small touchscreen on her temple and asked for a description of the world outside. "A cloudy sky," the response came back through her Google Glass. Robles lost the ability to see in her left eye when she was 28, and in her right eye a year later. Blindness, she says, denies you small details that help people connect with one another, like facial cues and expressions.


Matrix diagonalization and singular value decomposition: Static SageMath and dynamic ChatGPT juxtaposed

Karjanto, N.

arXiv.org Artificial Intelligence

We investigated some difficulties that students often face when studying linear algebra at the undergraduate level, and identified common mistakes they make when dealing with topics that require algorithmic thinking skills, such as matrix factorization. In particular, we focused on (orthogonal) diagonalization and singular value decomposition (SVD). We also explored these topics using SageMath, a free, open-source, Python-based computer algebra system (CAS) that has proven useful for assisting many students with the computational process, even though its output is static by nature. We then explored dynamic ChatGPT by querying the chatbot about the topic, either asking it to provide an example or to solve a problem, that is, to construct an (orthogonal) diagonalization or SVD of a particular matrix. By consolidating essential concepts in linear algebra and improving computational skills through effective practice, mastering these topics becomes easier and mistakes can be minimized. Static SageMath, in particular, is a great aid for confirming calculations and handling tedious computations. Although dynamic ChatGPT is relatively unreliable for solving problems in linear algebra, the mistakes it produces can become a valuable tool for improving critical thinking skills.
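The kind of calculation confirmation the abstract attributes to a CAS can be shown with a small worked SVD. The article works in SageMath; here NumPy stands in, since `numpy.linalg.svd` computes the same A = U diag(s) V^T factorization and lets a student check their hand-computed factors against a reconstruction.

```python
import numpy as np

# Small worked example: confirm an SVD by reconstructing the matrix.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)

# Singular values come back non-negative and in descending order.
assert s[0] >= s[1] >= 0

# Rebuilding A from the factors verifies the decomposition.
A_rec = U @ np.diag(s) @ Vt
print(np.allclose(A, A_rec))  # → True
```

The same check works for (orthogonal) diagonalization with `numpy.linalg.eigh` on a symmetric matrix, which is exactly the "calculation confirmation" role the article assigns to static CAS output.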